    A flexible space-variant anisotropic regularisation for image restoration with automated parameter selection

    We propose a new space-variant anisotropic regularisation term for variational image restoration, based on the statistical assumption that the gradients of the target image are distributed locally according to a bivariate generalised Gaussian distribution. The highly flexible variational structure of the corresponding regulariser encodes several free parameters, which make it possible to faithfully model the local geometry of the image and to describe local orientation preferences. For an automatic estimation of such parameters, we design a robust maximum likelihood approach and report results on its reliability on synthetic data and natural images. For the numerical solution of the corresponding image restoration model, we use an iterative algorithm based on the Alternating Direction Method of Multipliers (ADMM). A suitable preliminary variable splitting, together with a novel result in multivariate non-convex proximal calculus, yields a very efficient minimisation algorithm. Several numerical results are reported showing a significant quality improvement of the proposed model over related state-of-the-art competitors, in particular in terms of texture and detail preservation.
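    To make the role of the ADMM splitting concrete, the sketch below applies the same idea to a much simpler surrogate problem: anisotropic total-variation denoising, min_x ½‖x − f‖² + λ‖Dx‖₁, with the splitting z = Dx. It is not the space-variant anisotropic model of the paper; the function name, the parameter values and the choice of periodic boundaries are illustrative assumptions.

```python
import numpy as np

def admm_tv_denoise(f, lam=0.1, rho=1.0, n_iter=100):
    """Minimal ADMM sketch for min_x 0.5||x - f||^2 + lam*||Dx||_1
    (anisotropic TV, periodic boundaries), with the splitting z = Dx."""
    # Periodic forward differences along both axes, and their adjoint.
    Dx = lambda u: (np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u)
    DTz = lambda z1, z2: (np.roll(z1, 1, axis=1) - z1) + (np.roll(z2, 1, axis=0) - z2)

    # Fourier symbol of I + rho * D^T D (diagonal in the DFT basis).
    m, n = f.shape
    wx = 2 - 2 * np.cos(2 * np.pi * np.arange(n) / n)
    wy = 2 - 2 * np.cos(2 * np.pi * np.arange(m) / m)
    denom = 1.0 + rho * (wy[:, None] + wx[None, :])

    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    x = f.copy()
    z1, z2 = Dx(x)
    u1 = np.zeros_like(z1); u2 = np.zeros_like(z2)

    for _ in range(n_iter):
        # x-update: solve (I + rho D^T D) x = f + rho D^T (z - u) via FFT.
        rhs = f + rho * DTz(z1 - u1, z2 - u2)
        x = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # z-update: pointwise proximal map (soft thresholding for the l1 penalty).
        gx, gy = Dx(x)
        z1 = soft(gx + u1, lam / rho)
        z2 = soft(gy + u2, lam / rho)
        # Dual ascent on the scaled multipliers.
        u1 += gx - z1
        u2 += gy - z2
    return x
```

    The point the sketch illustrates is that, after splitting, each ADMM sub-problem is cheap: the x-update is a linear solve that diagonalises in the Fourier basis, and the z-update reduces to a pointwise proximal map (here soft thresholding; in the paper, a multivariate non-convex proximal operator).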

    Analysis and optimisation of a variational model for mixed Gaussian and Salt & Pepper noise removal

    We analyse a variational regularisation problem for mixed noise removal that was recently proposed in [14]. The data discrepancy term of the model combines L1 and L2 terms in an infimal convolution fashion and is appropriate for the joint removal of Gaussian and Salt & Pepper noise. In this work we perform a finer analysis of the model which emphasises the balancing effect of the two parameters appearing in the discrepancy term. Namely, we study the asymptotic behaviour of the model for large and small values of these parameters and compare it to the corresponding variational models with L1 and L2 data fidelity. Furthermore, we compute exact solutions for simple data functions, taking the total variation as regulariser. Using these theoretical results, we then analytically study a bilevel optimisation strategy for automatically selecting the parameters of the model by means of a training set. Finally, we report some numerical results on the selection of the optimal noise model via this strategy, which confirm the validity of our analysis and the use of popular data models in the case of "blind" model selection.
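    As a hypothetical illustration of the two ingredients mentioned above, the sketch below shows (i) an infimal convolution of an L1 and an L2 term, which for a scalar residual can be evaluated in closed form (a Huber-type function), and (ii) a discretised, sample-based stand-in for the bilevel strategy: a grid search over the two fidelity weights scored on a training pair. The function names, the grids and the abstract `solver` argument are placeholders, not the model of [14].

```python
import numpy as np
from itertools import product

def infconv_l1_l2(r, lam1, lam2):
    """Pointwise infimal convolution  min_v  lam1*|v| + (lam2/2)*(r - v)^2.
    Closed form: quadratic near zero, linear in the tails (Huber-type)."""
    cut = lam1 / lam2
    return np.where(np.abs(r) <= cut,
                    0.5 * lam2 * r ** 2,
                    lam1 * np.abs(r) - 0.5 * lam1 ** 2 / lam2)

def select_parameters(u_clean, f_noisy, solver, lam1_grid, lam2_grid):
    """Grid-search stand-in for the bilevel strategy: score each (lam1, lam2)
    by the squared error of the lower-level reconstruction on a training pair."""
    best_pair, best_err = None, np.inf
    for lam1, lam2 in product(lam1_grid, lam2_grid):
        u_rec = solver(f_noisy, lam1, lam2)        # lower-level problem
        err = np.sum((u_rec - u_clean) ** 2)       # upper-level training cost
        if err < best_err:
            best_pair, best_err = (lam1, lam2), err
    return best_pair, best_err
```

    The grid search replaces the analytical bilevel treatment of the paper with brute force, but it conveys the structure: an upper-level loss measured against ground-truth training data, and a lower-level variational problem solved for each candidate pair of fidelity parameters.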

    Beyond ℓ1 sparse coding in V1

    Growing evidence indicates that only a sparse subset from a pool of sensory neurons is active for the encoding of visual stimuli at any instant in time. Traditionally, to replicate such biological sparsity, generative models have used the ℓ1 norm as a penalty due to its convexity, which makes it amenable to fast and simple algorithmic solvers. In this work, we use biological vision as a test-bed and show that the soft thresholding operation associated with the use of the ℓ1 norm is highly suboptimal compared to other functions suited to approximating ℓq with 0 ≤ q < 1 (including recently proposed Continuous Exact relaxations), both in terms of performance and in the production of features that are akin to signatures of the primary visual cortex. We show that ℓ1 sparsity produces a denser code, or employs a pool with more neurons, i.e. has a higher degree of overcompleteness, in order to maintain the same reconstruction error as the other methods considered. For all the penalty functions tested, a subset of the neurons develops orientation selectivity similar to V1 neurons. When their code is sparse enough, the methods also develop receptive fields with varying functionalities, another signature of V1. Compared to other methods, soft thresholding achieves this level of sparsity at the expense of a much degraded reconstruction performance, which is likely not acceptable in biological vision. Our results indicate that V1 uses a sparsity-inducing regularization that is closer to the ℓ0 pseudo-norm than to the ℓ1 norm.
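    A toy sparse-coding experiment can make the soft- versus harder-thresholding comparison tangible. The sketch below runs a plain ISTA loop on a random overcomplete dictionary, once with soft thresholding (the ℓ1 prox) and once with hard thresholding as a crude ℓ0-style surrogate; it is not the V1 model or the Continuous Exact relaxations studied in the paper, and the dictionary, sizes and threshold are arbitrary assumptions.

```python
import numpy as np

def soft(z, t):
    # Soft thresholding: the prox of t*||.||_1, shrinks every coefficient.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def hard(z, t):
    # Hard thresholding at level t: keeps coefficients with |z| > t, zeroes the rest.
    return z * (np.abs(z) > t)

def ista(x, D, prox, lam=0.05, n_iter=200):
    """Generic ISTA loop: gradient step on 0.5*||x - D a||^2, then a thresholding step."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = prox(a - grad / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))           # random overcomplete "dictionary"
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
a_true = np.zeros(256)
a_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
x = D @ a_true                               # synthetic 8-sparse stimulus

for name, prox in [("soft (l1)", soft), ("hard (l0-like)", hard)]:
    a = ista(x, D, prox)
    err = np.linalg.norm(D @ a - x) / np.linalg.norm(x)
    print(f"{name:14s}  nonzeros={np.count_nonzero(a):3d}  rel. error={err:.3f}")
```

    The printout reports the code density (number of nonzero coefficients) together with the relative reconstruction error, the two quantities the comparison above revolves around.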

    Parameter-Free FISTA by Adaptive Restart and Backtracking

    We consider a combined restarting and adaptive backtracking strategy for the popular Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), frequently employed for accelerating the convergence of large-scale structured convex optimization problems. Several variants of FISTA enjoy a provable linear convergence rate for the function values F(x_n), of the form O(e^{-K√(μ/L) n}), under prior knowledge of the problem conditioning, i.e. of the ratio between the (Łojasiewicz) parameter μ determining the growth of the objective function and the Lipschitz constant L of its smooth component. These parameters are nonetheless hard to estimate in many practical cases. Recent works address the problem by estimating either parameter via suitable adaptive strategies. In our work both parameters can be estimated at the same time by means of an algorithmic restarting scheme where, at each restart, a non-monotone estimation of L is performed. For this scheme, theoretical convergence results are proved, showing that an O(e^{-K√(μ/L) n}) convergence speed can still be achieved, along with quantitative estimates of the conditioning. The resulting Free-FISTA algorithm is therefore parameter-free. Several numerical results are reported to confirm the practical interest of its use in many exemplar problems.
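    The following is a minimal sketch of the two mechanisms combined above, written for the LASSO problem min_x ½‖Ax − b‖² + λ‖x‖₁: a backtracking rule that only increases the estimate of L, and a gradient-style adaptive restart of the momentum. It is a generic textbook variant, not the Free-FISTA scheme itself (which in particular performs a non-monotone estimation of L at each restart and estimates μ as well); the function name and parameters are assumptions.

```python
import numpy as np

def soft(z, t):
    # Prox of t*||.||_1 (soft thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista_restart_backtracking(A, b, lam, n_iter=500, L0=1.0, eta=2.0):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 with
    (i) backtracking estimation of the Lipschitz constant L and
    (ii) adaptive restart of the momentum (O'Donoghue-Candes style heuristic)."""
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad = lambda x: A.T @ (A @ x - b)

    x = np.zeros(A.shape[1]); y = x.copy()
    t, L = 1.0, L0
    for _ in range(n_iter):
        g = grad(y)
        # Backtracking: increase L until the quadratic upper bound at y holds.
        while True:
            x_new = soft(y - g / L, lam / L)
            diff = x_new - y
            if f(x_new) <= f(y) + g @ diff + 0.5 * L * (diff @ diff):
                break
            L *= eta
        # Adaptive restart: drop the momentum when the composite gradient
        # mapping at y and the last step disagree in direction.
        if (y - x_new) @ (x_new - x) > 0:
            t = 1.0
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

    The restart resets the inertial sequence whenever the momentum starts working against descent, which is what lets accelerated schemes of this kind behave well without knowing the growth parameter μ in advance.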